4 research outputs found

    Sequential Frame-Interpolation and DCT-based Video Compression Framework

    Video data is ubiquitous; capturing, transferring, and storing even compressed video is challenging because it requires substantial resources. With the large amount of video traffic transmitted over the internet, even a small improvement in compressing such data can have a drastic impact on resource consumption. In this paper, we present a hybrid video compression framework that unites the advantages of DCT-based and interpolation-based video compression methods. We show that our work can deliver the same visual quality, or in some cases improve it, while reducing bandwidth by 10–20%.
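
    The abstract above describes combining frame interpolation with DCT residual coding. The Python sketch below illustrates the general idea only, under stated assumptions: a naive average of neighbouring frames stands in for the paper's learned interpolator, and the helper names (dct2, idct2, compress_residual) and the quantization step q_step are illustrative choices, not the authors' implementation.

    # Illustrative sketch only; requires numpy and scipy.
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        # 2-D type-II DCT with orthonormal scaling
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(block):
        # inverse of dct2
        return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def compress_residual(frame, prediction, q_step=16.0, block=8):
        # Quantize the block DCT of (frame - prediction); the quantized
        # coefficients are what would be entropy-coded and transmitted.
        residual = frame.astype(np.float64) - prediction.astype(np.float64)
        recon = prediction.astype(np.float64)
        h, w = residual.shape
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                coeffs = dct2(residual[y:y + block, x:x + block])
                quantized = np.round(coeffs / q_step)          # lossy step
                recon[y:y + block, x:x + block] += idct2(quantized * q_step)
        return np.clip(recon, 0, 255).astype(np.uint8)

    # Toy usage: an average of the neighbouring frames plays the role of the
    # interpolated prediction, then DCT-coded residuals patch it up.
    prev_f = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    next_f = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    true_f = np.clip((prev_f.astype(np.int32) + next_f) // 2 + 5, 0, 255).astype(np.uint8)
    prediction = ((prev_f.astype(np.int32) + next_f) // 2).astype(np.uint8)
    reconstructed = compress_residual(true_f, prediction, q_step=16.0)

    Larger q_step values shrink the residual payload at the cost of fidelity, which is where a bandwidth/quality trade-off of the kind reported above comes from.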

    Leveraging Image Processing Techniques to Thwart Adversarial Attacks in Image Classification

    No full text
    Deep Convolutional Neural Networks (DCNNs) are vulnerable to images that have been altered with well-engineered, imperceptible perturbations. We propose three color-quantization pre-processing techniques that make DCNNs more robust to such adversarial perturbations: Gaussian smoothing with PNM color reduction (GPCR), color quantization using Gaussian smoothing and K-means (GK-means), and fast GK-means. We evaluate the approaches on a subset of the ImageNet dataset. Our evaluation reveals that the GK-means-based algorithms achieve the best top-1 accuracy. We also present the trade-off between the GK-means-based algorithms and GPCR with respect to computation time.
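
    As a rough illustration of the kind of pre-processing described above, the sketch below applies Gaussian smoothing followed by K-means color quantization before an image is passed to a classifier. It is a generic sketch in the spirit of GK-means, not the paper's algorithm; the helper name gk_quantize and the sigma and k values are assumptions.

    # Illustrative sketch only; requires numpy, scipy, and scikit-learn.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.cluster import KMeans

    def gk_quantize(image, sigma=1.0, k=16):
        # Smooth an HxWx3 uint8 image, then snap every pixel to one of k
        # representative colors found by K-means.
        smoothed = gaussian_filter(image.astype(np.float64),
                                   sigma=(sigma, sigma, 0))  # no blur across channels
        pixels = smoothed.reshape(-1, 3)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
        quantized = km.cluster_centers_[km.labels_].reshape(image.shape)
        return np.clip(quantized, 0, 255).astype(np.uint8)

    # The quantized image is what would be fed to the DCNN instead of the raw
    # (possibly adversarially perturbed) input.
    img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
    clean_input = gk_quantize(img, sigma=1.0, k=8)

    Smaller k discards more of the fine-grained perturbation but also more legitimate detail, which mirrors the robustness-versus-accuracy trade-off the abstract discusses.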

    FID: Frame Interpolation and DCT-Based Video Compression

    No full text
    In this paper, we present a hybrid video compression technique that combines the residual coding of traditional DCT-based video compression with learning-based video frame interpolation to reduce the amount of residual data that must be compressed. Learning-based frame interpolation techniques use machine learning algorithms to predict frames but struggle with uncovered areas and non-linear motion. Our approach applies DCT-based residual coding only to areas that are difficult for video interpolation and provides tunable compression for such areas through adaptive selection of the data to be encoded. Experimental results are provided for both PSNR and the newer Video Multi-Method Assessment Fusion (VMAF) metric. Our results show that we can reduce the amount of data required to represent a video stream compared with traditional video coding while outperforming video frame interpolation techniques in quality.
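
    The adaptive selection described above can be pictured with the following sketch, which flags for DCT residual coding only the blocks whose interpolation residual energy exceeds a threshold, and includes a standard PSNR helper. The block size, the energy threshold, and the helper names (select_blocks, psnr) are illustrative assumptions, not the paper's actual parameters.

    # Illustrative sketch only; requires numpy.
    import numpy as np

    def select_blocks(frame, prediction, block=16, energy_thresh=500.0):
        # Return (y, x) origins of blocks whose interpolation residual energy
        # exceeds the threshold; only these blocks would receive DCT residual
        # coding, the rest are taken directly from the interpolated prediction.
        residual = frame.astype(np.float64) - prediction.astype(np.float64)
        selected = []
        h, w = residual.shape
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                energy = np.mean(residual[y:y + block, x:x + block] ** 2)
                if energy > energy_thresh:
                    selected.append((y, x))
        return selected

    def psnr(a, b, peak=255.0):
        # Peak signal-to-noise ratio in dB between two uint8 frames.
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    Lowering energy_thresh sends more blocks through residual coding, which is the kind of tunable quality/bitrate trade-off the abstract refers to.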